Is a naturalistic account of reason compatible with its objectivity?

How can rational objectivism be reconciled with my principles of naturalism?

Greg Detre

Monday, 5th November, 2001

Dr Tasioulas

 

 

Perhaps part of the problem with current conceptions of rationality is that they may conflate a number of components, which together give rise to what we consider to be rationality. This is a specific application of a broad idea currently popular in cognitive science and philosophy of mind, which goes under many names, including modularity, the multiple drafts hypothesis (Dennett) and the society of mind. I will adopt Dennett's terminology of 'multiple drafts', used to refer to concurrent, restricted processes or modules, which interact, influence each other and compete for control of the system. At different times, different modules will dominate, allowing us to react flexibly in a variety of situations, and literally to 'contain multitudes'[1].

Where do the incompatibilities between naturalism and objectivism about reason lie?

Taxonomies of rationality

There are an enormous number of different ways in which we could try to divide up rationality, some resulting in similar taxonomies.

Evolutionary age

The most evolutionarily ancient part of the brain is the hindbrain. Its broad design can be traced evolutionarily to our reptilian past. It links directly to the spinal cord, controlling basic functions like respiration and heartbeat. On top of this rests the 'mammalian' mid-brain, at a slightly higher level. The highest level is the neocortex, a sheet of grey matter a few millimetres thick that is most highly developed in intelligent species like primates and cetaceans. In conjunction with various sub-cortical areas, the cortex is responsible for all higher-level processing. It might be possible in the long-term future to try to separate the various roles played in reasoning by the different areas, though it seems likely that almost all of what we would really term rationality occurs in various parts of the cortex.

Forward/backward

There is an attractiveness to trying to divide our reasoning process into 'forward' and 'backward' reasoning, although the divide might be better termed 'open/closed' or even 'sub-conscious/conscious'.

To illustrate the difference, consider the game of chess. When faced with an early or mid-game board position, we can assume that neither a human nor any (foreseeable) computer player could consider all the legal continuations. However, with varying degrees of success (and subject to practice and understanding of the rules), a good human player will have only a handful of the best moves presented to his consciousness. The brain has somehow sub-consciously searched forwards from a starting position through an enormous space, and our awareness is limited to only the most optimal solutions found.

In contrast, if in the post-game analysis your opponent points out his reasons for making or not making a given move (probably couched partly in terms of intentional mental states and partly in terms of the rules, mechanics and future moves of the game), then your comprehension and careful assessment of those reasons are working backwards.

Perhaps one way of putting this is to say that forward reasoning is the process by which we effectively produce the reasons for one course or choice rather than another, although we are not literally producing the reasons so much as noting the particular courses or choices for which good reasons exist. Backward reasoning is then the process by which we carefully elaborate and weigh up the respective reasons behind the available choices, and make our evaluation.

I am deliberately avoiding the categorical/hypothetical and theoretical/practical distinctions here.

solidify

 

However, when we watch a chess game or consider the move that our opponent has made, we might try to decide why a given move was made. Given a close starting and finishing point, we try to establish why a move is good. We are evaluating a static situation. Indeed, whenever we consciously consider a move that has welled up out of our subconscious, we are using backwards reasoning.

 

surely even short leaps of reasoning/comprehension are no more 'open' than the big searches though???

 

A player can look at a chess board and choose a 'best' move. This is what I will term 'forward reasoning'. It is directed towards an ultimate goal, that of winning the game.

 

By backward reasoning, I mean the process by which we assess the validity of arguments or consider a choice retrospectively.

In contrast, forward reasoning concerns inference from premises, and any thinking where we are considering a potentially infinite number of unknown conclusions, trying to find the 'right' one(s).

When we talk of the number of moves that a chess-player actively considers as being a handful, or perhaps a few tens, we are talking of the number that our faculty of backward reasoning can consider. However, in order to narrow down the enormous space of possible moves to these few most appropriate ones in the first place, we use forward reasoning. Our brain explores an enormous possibility space unconsciously, presenting only the best few for careful consideration.
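This contrast can be made concrete in code. What follows is a minimal sketch, not real chess: the toy game, the evaluation function and all the names (legal_moves, forward_search and so on) are my own inventions for illustration. 'Forward reasoning' is modelled as a search over a large tree that surfaces only the top few candidate moves, while 'backward reasoning' evaluates a single given move after the fact.

# Toy model, not real chess: states are integers, "moves" add small values,
# and the heuristic simply prefers states near an arbitrary target.

TARGET = 42

def legal_moves(state):
    return range(-9, 10)

def apply_move(state, move):
    return state + move

def evaluate(state):
    # Heuristic value: higher is better (closer to TARGET).
    return -abs(TARGET - state)

def forward_search(state, depth, beam_width=3):
    """Explore the whole tree to `depth`, but surface only the best few
    root moves: the 'handful' presented to consciousness."""
    def best_value(s, d):
        if d == 0:
            return evaluate(s)
        return max(best_value(apply_move(s, m), d - 1) for m in legal_moves(s))
    scored = [(best_value(apply_move(state, m), depth - 1), m)
              for m in legal_moves(state)]
    scored.sort(reverse=True)
    return scored[:beam_width]

def backward_assess(state, move):
    """Evaluate one given move retrospectively."""
    return evaluate(apply_move(state, move))

print("forward search surfaces:", forward_search(state=0, depth=2))
print("backward assessment of +5:", backward_assess(0, 5))

The design point is that the search itself visits hundreds of positions, but only the top beam_width candidates ever reach the caller, just as only a few moves ever well up into awareness.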

Descartes' method of doubt provides the supreme example of forward reasoning: having elected to suspend all belief, he has to reason forwards from nowhere. He argues that the Cogito is not a syllogism, but rather serves as a premise. In a way, he knows that he is trying to justify his beliefs in the world as it manifestly appears to us, but the routes by which we might attempt this are genuinely numberless and cannot be wholly captured by any schema. This is partly an aspect of language: as long as our arguments are based in language, and we have no real alternative, the combinatorial nature of syntax (which is what gives it its huge expressive power) is such that we can construct a genuinely infinite number of sentence-propositions, although of course most of these would be wholly nonsensical.
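The combinatorial point can be illustrated with a toy grammar; the word lists below are arbitrary inventions, not drawn from any corpus, and serve only to show how quickly sentence counts grow.

from itertools import product

SUBJECTS = ["the mind", "the board", "a doubt", "every premise"]
VERBS = ["contains", "refutes", "resembles", "justifies"]
OBJECTS = ["a multitude", "the conclusion", "its own skull", "the game"]

def sentences(n_clauses):
    # All sentences built by conjoining n simple subject-verb-object clauses.
    clauses = [" ".join(parts) for parts in product(SUBJECTS, VERBS, OBJECTS)]
    return [" and ".join(c) for c in product(clauses, repeat=n_clauses)]

for n in range(1, 4):
    print(n, "clause(s):", len(sentences(n)), "sentences")

# Prints 64, 4096 and 262144: exponential growth in sentence length, and with
# recursive embedding ("I believe that ...") the set becomes genuinely
# infinite. Most of the output, of course, is grammatical but nonsensical.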

The situation in philosophy, then, is very different from that of a chess game, where the available moves are restricted by definite, easily applied rules. In philosophy the only restrictions are those of grammar (propositions must be expressible as sentences) and of plausibility (the more believable, i.e. justified, a proposition seems to us, the better). This may present different challenges for our reasoning systems, and it may be that we are better at either forward or backward reasoning in this situation.

Linguistic/non-linguistic

There is a definite appeal to the idea that some of our reasoning is non-linguistic in some fashion. This can be taken in several ways:

1. In opposition to the Language of Thought Hypothesis, the processing of the brain cannot be described as language-like syntactic manipulation of symbols. This relates back to Smolensky's stronger connectionist thesis.

2. Sometimes we seem to be performing a task which we only afterwards express in propositional/linguistic terms, e.g. thinking in an imagistic fashion. When faced with two similar but not identical pictures, we are usually able to point out the difference between them. This seems like a clearly non-linguistic piece of reasoning (i.e. 'accessing objectively valid truths'), since we certainly don't appear to be representing every single component, object and shape propositionally and making comparisons.
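A minimal sketch of the spot-the-difference case, assuming nothing beyond the numpy library: the two 'pictures' are tiny arrays standing in for real images, and the comparison is a single operation over pixels, with no propositional description of the objects depicted.

import numpy as np

picture_a = np.zeros((4, 4), dtype=int)
picture_b = picture_a.copy()
picture_b[2, 3] = 1                      # the one difference

diff_mask = picture_a != picture_b       # whole-array, non-linguistic comparison
rows, cols = np.nonzero(diff_mask)
print("differences at:", list(zip(rows.tolist(), cols.tolist())))

# Prints [(2, 3)]: the difference is found without representing any
# component, object or shape as a proposition.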

Domains of rationality

the domain in which you're using it: ecological (e.g. theory of mind), mathematical, inferential …

Is it possible that the scientific and the logical domains of reasoning actually reflect fundamentally different rational processes?

Levels of rationality, minimal/idealised rationality, degrees of objectivity

If we could show that the demands placed on our reasoning are somehow less for naturalistic reasoning than for a priori or philosophical reasoning, then we could accept a naturalistic account of the limitations of reasoning, and avoid any attempts by philosophers to show that such arguments are self-refuting.

How can we understand this idea of levels of rationality? Is it simply that different domains of thought and discussion place greater or lesser demands on our intellect? That our brains just get them right more often? Or that they are somehow fundamentally, of their nature, easier to grasp, and their conclusions easier to draw? Or, even less satisfyingly, simply that we have no choice but to take some things as given, since otherwise our mental scaffold can never get off the (wholly skeptical) ground?

 

Cherniak provides one approach to considering degrees of rationality. In particular, he is attacking the idealised, more-or-less all-or-nothing conception of rationality that authors like Dennett, Davidson, Quine and Cohen support.

He accuses Davidson, in 'Psychology as Philosophy', of saying that we need a 'large degree of consistency' but actually arguing for ideal consistency, as does Quine's translation policy. Inconsistency can be very difficult to unmask if the logical relations are convoluted and the inconsistency implicit; also, we tend to compartmentalise our beliefs, only comparing beliefs within a subset.

He attacks Dennett's claim that 'as we uncover apparent irrationality under an Intentional interpretation of an entity, our grounds for ascribing any beliefs at all wanes'; Cherniak argues that this is not the case for above-minimally rational creatures. I'm not so sure: a just-above-minimally rational creature might seem to hold only a skeletal set of beliefs. It's a matter of degree, really.

Cherniak is looking to explain why intentional explanations are so successful as a means of predicting and understanding others' behaviour. By intentional explanations, he refers to the attribution of a cognitive system of beliefs, desires, perceptions etc. He wants to show that too weak a conception of rationality is insufficient to explain the success of these intentional explanations, while too strong a conception is unable to do so either, for different reasons, as well as being wholly inapplicable to human beings in the real world.

His 'minimal general conditions for rationality' have to lie between what he characterises as the 'assent theory of belief' and the 'ideal conditions of rationality'. The assent theory of belief considers that:

An agent believes 'all and only those statements which he would affirm', i.e. that believing a proposition consists simply in having an accompanying 'feeling of assent'.

Almost anything goes in such a caricatured theory, since it places no inherent consistency constraints on beliefs, and provides no system by which inferences can be drawn from a given set of beliefs. As a result, it is quite unable to explain the predictive success of assuming intentionality in other people, since an agent is free to hold any beliefs he chooses; or at least, there is no systematic way of predicting, deducing or explaining which beliefs such an agent would have.

At the opposite end of the spectrum, Cherniak characterises the ideal general rationality criterion as:

An ideally-rational agent with a particular belief-desire set would:

make all of the sound inferences from his belief set

undertake all actions which would, according to his beliefs, tend to satisfy his desires

eliminate all inconsistencies that arise in his belief-set

This can be weakened slightly by modifying 'all actions' to 'most actions', or perhaps just 'some non-empty set of actions'. However, it leaves no room for 'sloppiness'. Sloppiness in Cherniak's sense is almost a technical term, encompassing all of the factors which undermine our deductive ability. These include: laziness or carelessness; the difficulty of the deduction to be made (i.e. whether it is convoluted, indirect, or requires numerous unrelated-seeming premises); cognitive limitations (e.g. short-term memory); time constraints; and, most fundamentally, the 'finitary predicament'. We have finite-sized brains and a finite time available to us, and so we are restricted in the number and range of inferences we can consider, let alone draw.

The reason that these idealisations are made is that they simplify human behaviour to a level manageable enough to formalise in disciplines which deal with an enormous mass of human interactions, like economics. However, they remain wholly inapplicable to individual human beings reasoning in the real world.

The minimal rationality conditions he sets out are as follows (a toy sketch of such an agent follows the list):

A minimally-rational agent with a particular belief-desire set would:

make some, but not necessarily all, of the sound inferences from his belief set

attempt some, but not necessarily all, of those actions which would, according to his beliefs, tend to satisfy his desires (termed 'apparently appropriate actions')

not attempt most (but not necessarily all) of the actions which are inappropriate given that belief-desire set (the corresponding 'negative rationality' requirement)

eliminate some (but not necessarily all) inconsistencies that arise in his belief-set
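Here is the promised sketch. It is my own toy construction rather than Cherniak's formalism: beliefs are atoms and conditionals, the only inference rule is modus ponens, and the difference between the ideal and the minimal agent is simply a cap on how many inference steps the latter can afford.

# Beliefs: atoms are strings; a conditional is a tuple ("if", p, q).
beliefs = {
    "p",
    ("if", "p", "q"),
    ("if", "q", "r"),
    ("if", "r", "s"),
}

def modus_ponens_closure(bs, max_steps=None):
    # Draw modus ponens inferences; cap the number of steps for a
    # "minimal" agent, leave uncapped for the "ideal" one.
    bs = set(bs)
    steps = 0
    changed = True
    while changed:
        changed = False
        for b in list(bs):
            if isinstance(b, tuple) and b[0] == "if" and b[1] in bs and b[2] not in bs:
                if max_steps is not None and steps >= max_steps:
                    return bs        # the finitary predicament bites here
                bs.add(b[2])
                steps += 1
                changed = True
    return bs

ideal = modus_ponens_closure(beliefs)                 # derives q, r and s
minimal = modus_ponens_closure(beliefs, max_steps=1)  # derives only q
print("ideal agent believes:", sorted(b for b in ideal if isinstance(b, str)))
print("minimal agent believes:", sorted(b for b in minimal if isinstance(b, str)))

The minimal agent still makes some sound inferences, as the conditions require; it simply stops well short of the full deductive closure.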

He is particularly keen to attack the idea that an agent actually believes (or infers, or can infer) all consequences of his beliefs.

I don't think that any of the philosophers whom Cherniak accuses of idealising rationality would explicitly accept the premise couched in those terms. It is obvious that this would require infinite resources, since it would probably require analysing some belief-sentences that could not even be stated, let alone understood, within the agent's lifetime.

For instance, take the Goldbach conjecture. We have a set of axioms and a conjectured inference, and yet we are unable to tell whether the inference follows deductively. Appeals to more prosaic cognitive limitations, like short-term memory, carelessness or simply failing to take relevant premises into account by accident, cannot explain our failure. In one sense, the problem is simply that the space of possible mathematical proofs is far, far too big for us to be able to search through it. This feels like a simple concession to Cherniak's statement of our finitary predicament. But I think it concedes far more than that, because the space in which we operate on a daily basis when acting rationally is almost always far larger than we can possibly search. And if this is the case, then we simply are not rational in the way that Nagel requires.
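The asymmetry is worth spelling out. Verifying instances of the conjecture is trivial mechanical search, as the sketch below shows (the function names are my own); what eludes us is a search through the space of proofs, which no amount of instance-checking touches.

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def goldbach_witness(even_n):
    # Return a pair of primes summing to even_n, or None if none exists.
    for p in range(2, even_n // 2 + 1):
        if is_prime(p) and is_prime(even_n - p):
            return (p, even_n - p)
    return None

for n in range(4, 30, 2):
    print(n, "=", "%d + %d" % goldbach_witness(n))

# Every even number tried yields a witness, but this only searches the space
# of instances; the conjecture itself lives in the vastly larger space of
# possible proofs.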

The burden is on Nagel to explain why we can't do the things that a perfectly rational being can do, and this requires a naturalistic approach. In order to make his position intelligible, he has to explain why maths isn't trivial, rather than appealing to our intuitions about our ability to appreciate a mathematical or logical truth from the content of the proposition alone.

if rationality amounts to searching a space, then we aren't rational; what about if we employ special heuristics in some way???

We also know which reasoning tasks are more difficult for humans than others, i.e. we have a weighting of deductive tasks with respect to their feasibility for the reasoner, so that we can guess which inferences are easier and more likely to be drawn; this is the theory of feasible inferences. He leaves it as an open question whether the most 'obvious' inferences (like modus ponens) could be performed by any creature that qualifies as having beliefs.

A theory of human memory structure helps you know which beliefs will be recalled when, e.g. whether the premises and rules are active at the time of considering a belief or conclusion. Thus, the activated belief subset is subject to a more stringent inference condition than the inactive belief set. I think it would be an even more powerful theory if it were couched in connectionist terms of association, rather than discrete subsets.
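A small sketch of the point, couched (as Cherniak has it) in terms of discrete subsets rather than connectionist association. The belief encoding is the same toy format as before, and whether an inference gets drawn depends entirely on what happens to be recalled together.

rule = ("if", "socrates_is_a_man", "socrates_is_mortal")
long_term_memory = {"socrates_is_a_man", rule,
                    ("if", "socrates_is_mortal", "socrates_will_die")}

def infer_from_active(active):
    # One pass of modus ponens, using only the activated subset as premises.
    derived = set(active)
    for b in list(derived):
        if isinstance(b, tuple) and b[0] == "if" and b[1] in derived:
            derived.add(b[2])
    return derived - set(active)

# The active sets below are subsets of long_term_memory.
# Premise and rule recalled together: the inference is drawn.
print(infer_from_active({"socrates_is_a_man", rule}))
# Rule left inactive in long-term memory: the same agent draws nothing.
print(infer_from_active({"socrates_is_a_man"}))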

 

In determining whether a person ought to make a given inference in order to be pragmatically rational, you need to take into account: the soundness of the inference; its feasibility; and its apparent usefulness according to the person's beliefs and desires.

Cherniak skips over the difference between conscious and unconscious inferences, and explicitly makes the assumption that our entire belief-desire system can be expressed as a finite set of (logically interpretable) sentences.

Others have taken a similar approach, notably Simon and Goldman.

Performance vs competence

Cohen distinguishes between performance and competence, which amounts to the distinction between how well you actually do something and how well you are (potentially) capable of doing it. In the same way that a superb sportsman may have an off-day because of lack of sleep or nerves, our performance as reasoners (e.g. in various psychological tests) may be significantly inferior to our competence on a good day. This could be for a variety of quite prosaic reasons, such as the ones given above for the sportsman, as well as a few deeper and more specific ones.

give better examples of failures of performance

Cohen uses the example of English grammar. Although we regard most speakers of the English language as competent grammarians, insofar as they can distinguish grammatical from ungrammatical sentences very reliably, our performance varies a great deal: we often make grammatical errors during the slapdash flurry of a casual conversation, including errors that we ourselves could identify as errors. Cohen argues that the only way to define the grammar of a language is through the careful intuitions of what Rorty might term an ideal community of speakers of that language. The job of linguists is to systematically describe the sum of careful, reflected intuitive grammatical judgements given by just such a group of intelligent, fluent (probably native) speakers. Where obvious schisms do appear between groups of speakers, we can say that we have distinguished dialects within the language.

In the same way, 'the only way to tell that modus ponens and modus tollens are valid inference rules is that competent thinkers judge arguments of this form to be good ones. Note that this does not mean that competent thinkers will never be misled by the presentation of an argument and fail to recognize that modus tollens is an applicable inference rule.'

This leads convincingly towards a sort of reflective-equilibrium view of human reasoning, where normative reasoning criteria are based on the intuitions of ordinary people. These intuitions have been systematically described in greater and greater detail, from Aristotle through to Frege, 'constructing a coherent system of rules and principles by which those same people can, if they so choose, reason much more extensively and accurately than they would otherwise'. Thus, 'human rationality, in the sense of the possession of a basic competence in judging inferences to be logically sound, follows from the fact that we can only know what the rules of logic are by comparing them to what people intuitively judge to be logically sound.'
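As an aside, the two forms Cohen mentions can of course be checked mechanically, as in the sketch below; though on Cohen's own view this machinery only has authority because competent thinkers endorse the rules it encodes.

from itertools import product

def implies(p, q):
    return (not p) or q

def valid(premises, conclusion):
    # A form is valid iff no truth assignment makes every premise true
    # and the conclusion false.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

modus_ponens = valid([lambda p, q: implies(p, q), lambda p, q: p],
                     lambda p, q: q)
modus_tollens = valid([lambda p, q: implies(p, q), lambda p, q: not q],
                      lambda p, q: not p)
print("modus ponens valid:", modus_ponens)    # True
print("modus tollens valid:", modus_tollens)  # True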

The question remains as to whether this process leads to objective truths; after all, it seems to be relativising rationality to a human consensus…

Cohen�s approach allows us to see degrees of rationality in terms of degrees of objectivity.

He describes the process by which our rational intuitions are honed in a similar way to Nagel's description of how we move towards objectivity: we take a subjective view, and attempt to step back from it to 'form a new conception which has that view and its relation to the world as its object'.

 

Other taxonomies?

adaptive value

amount of time we spend on it

context dependence

practical/theoretical; is this the same as instrumental vs what(???)??? acting vs inferring???

choosing premises vs arguing from those premises

physical and abstract

 

How do we explain the experimental results?

Cohen explains the results of psychological research into human inferential failings as resulting from 'either the presentation of the problem, or from subjects' inability to properly encode the logical structure of the task being presented', both of which are failures of performance rather than competence.

The overall thrust of Cohen's conclusion is that the research on human inferential shortcomings should be construed as showing how subjects can be vulnerable to "cognitive illusions" when problems are presented in unfamiliar ways that interfere with their inferential performance, not as showing that human beings lack the logical competence to deal effectively with reasoning problems because they systematically rely on "heuristics" rather than on correct logical rules.(???)

Where does this all leave us?

Nozick

Nozick's account is attractive in a number of ways. It can be accommodated with minimal metaphysical commitments.

Its price is that it does not really face Nagel head-on: Nozick is content to admit that he is not explaining rationality 'from first principles'(???); he is presupposing a degree of rationality in order to consider one's own rationality. And, as I will discuss later, this is the only position that I think we can take as philosophers. On the one hand, we face an empty, skeptical suspension of belief, since we recognise that in order to hold any justified beliefs whatsoever, we first require a justified belief about our ability to form such beliefs. And yet, in suspending our belief, we have already recognised that this is the only rational option. In this way, Nagel's characterisation of 'thoughts that we cannot get outside of' is particularly appropriate.

In a way, it's obvious that we could never monitor our entire brain: with what would we be doing the monitoring? Where could we stand to view our position from any vantage but our own? Can we turn our eyes back upon our own skull (in a more meaningful sense than just the eyeball-rolling party trick)?

So we have little choice but to accept that simply being able to frame the question of one's own rationality is a sort of base condition for rationality. Doubting is, of necessity, a kind of rational thinking. Descartes' cogito may thus serve to bootstrap us (or as evidence that we have already bootstrapped ourselves) into knowledge of our own rationality.

Perhaps it is not so much the doubting or questioning of one's own rationality as simply being able to conceive of rationality at all. Perhaps the complex notion of rationality is its own key. The ability to conceive abstractly of context-independent, formal, generalisable methods and propositions (or perhaps the notions of context-independence, formality, generalisability, method and proposition collectively) forms the tip of a cognitive-framework iceberg: a syntax-manipulating, representation-of-representation mind, even a fallible, specialised, evolved one.

 

Incompatibility

My stated intention at the start of this paper was to investigate how easily a naturalistic framework and rational objectivism can accommodate each other. I was hoping and expecting to find that they were incompatible in certain fundamental, ineradicable ways. Given that I feel that we are far better scientists than philosophers, this would have further persuaded me that the reason we disagree on almost all non-empirical issues is that we are not sufficiently rational or powerful thinkers to make real headway in such areas. This would not necessarily be to dismiss out of hand the entire philosophical enterprise, but it would undermine it in those areas where there is no support from other disciplines to provide arbitration in disputes.

As it has turned out, I have found even a restrictive, contemporary naturalistic account to be surprisingly pliable with respect to our rational capacities.

 

Discarded

However, it is not at all obvious that this is a linguistic act or reasoning procedure. According to Nagel's definition of reason as 'accessing objectively valid truths' it qualifies as reasoning, but it seems implausible that our performance of the task is a linguistic operation.

Questions

theoretical vs practical???

 



[1] Walt Whitman, 'Song of Myself':

Do I contradict myself?

Very well then I contradict myself,

(I am large, I contain multitudes.)